Evaluators entering the world of Clinical and Translational Science Award (CTSA) institutes are often struck by the complexity of evaluating these institutions. Each CTSA encompasses several cores, or research service providers (e.g., drug discovery and development, biomedical informatics, or community engagement), with drastically different focuses and, therefore, indicators of progress. Annually, evaluators are challenged with developing integrated reports for the National Institutes of Health using data collected across the diverse CTSA cores. This poster presentation will first display how two new-to-CTSA evaluators tackled the task of understanding CTSAs.
FINAL Poster_10_4_13.pdf
Abstract: Novel educational offerings aimed at training translational researchers in the skills necessary to pursue collaborative, team-based research have increased with the introduction of the Clinical and Translational Science Awards (CTSA). Yet evaluating team training curricula and their impact is often challenging because of the unconventional course material, the wide range of trainees, and the uniqueness of the competencies trainees are expected to obtain. This presentation describes our evaluation of the UC Davis Clinical and Translational Science Center team science curriculum.
Rainwater & Henderson Team Science AEA 2012.pdf
The Clinical and Translational Science Awards (CTSA) incorporate innovative translational research training programs aimed at producing a diverse cadre of scientists who work collaboratively to rapidly translate biomedical research into clinical applications. Evaluating these programs, which emphasize team science, interdisciplinary research, and acceptance of a range of career trajectories, challenges evaluators to develop outcome measures that go beyond simply counting traditional academic products, such as individual publications and grants.
AEA 2011 Session 779 Rainwater Griffin and Henderson - Tracking for Translation.pdf
At the UW Institute for Clinical and Translational Research, embedded evaluators use nested logic models to summarize complex layers of intent, assist with program improvement, encourage an evaluative perspective, communicate program achievements, and identify evaluation tasks and metrics.
Hogle AEA 2011 session 871.pptx
Case studies are also flexible and responsive to the changing needs of an evaluation. This presentation provides examples from the University of Illinois at Chicago Center for Clinical and Translational Science (UIC CCTS) of how case studies are used to support tracking and evaluation activities.
Bates Johnson Success Case Study AEA 12 a.pptx
Identifying successful cases is especially important in complex evaluations, such as those of the Clinical and Translational Science Awards, where successes are definitionally complex and often do not develop in linear or predictable ways. For success case studies to be fully effective, however, evaluators need to find innovative ways to communicate them to diverse stakeholders, who may otherwise dismiss them as mere anecdotes or unconnected “stories.”
AEA Communicating Success Case Studies public elibrary henderson.pptx